# Efficient Distillation Models

## FairyR1-32B
**PKU-DS-LAB** · Apache-2.0 · 372 downloads · 85 likes

FairyR1-32B is an efficient large language model built on DeepSeek-R1-Distill-Qwen-32B and optimized through distillation and model merging; it excels at mathematical and programming tasks.

Tags: Large Language Model · Transformers · English
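
A minimal sketch of loading such a model for chat-style generation, assuming it is published on Hugging Face under the repo id `PKU-DS-LAB/FairyR1-32B` (an assumption based on this listing):

```python
# Minimal sketch: loading FairyR1-32B for chat-style generation.
# "PKU-DS-LAB/FairyR1-32B" is an assumed repo id based on this listing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PKU-DS-LAB/FairyR1-32B"  # assumption, not verified here
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 32B model needs half precision and substantial GPU memory
    device_map="auto",
)

messages = [{"role": "user", "content": "Prove that the sum of two even integers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```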
## Deepseek R1 Chinese Law
**corn6** · Apache-2.0 · 74 downloads · 2 likes

A Llama model fine-tuned 2x faster with Unsloth and Hugging Face's TRL library (a fine-tuning sketch follows the Travelbot entry below).

Tags: Large Language Model · Transformers · English
## Travelbot
**kitty528** · Apache-2.0 · 9,146 downloads · 2 likes

A Llama model fine-tuned 2x faster with Unsloth and Hugging Face's TRL library.

Tags: Large Language Model · Transformers · English
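
The two entries above describe the same Unsloth + TRL supervised fine-tuning recipe; below is a minimal sketch of it. The base checkpoint, dataset contents, and hyperparameters are placeholders, and some `SFTTrainer` argument names have changed across TRL releases:

```python
# Minimal sketch of the Unsloth + TRL supervised fine-tuning recipe.
# Base model, dataset, and hyperparameters are placeholders.
from unsloth import FastLanguageModel
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # placeholder base checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)
# Attach LoRA adapters; only these low-rank weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = Dataset.from_dict({"text": ["### Question: ...\n### Answer: ..."]})  # placeholder data

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,        # renamed processing_class in newer TRL releases
    train_dataset=dataset,
    dataset_text_field="text",  # moved into SFTConfig in newer TRL releases
    max_seq_length=2048,
    args=TrainingArguments(per_device_train_batch_size=2, num_train_epochs=1, output_dir="outputs"),
)
trainer.train()
```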
## Akshara-8B Llama Multilingual V0.1
**SVECTOR-CORPORATION** · Apache-2.0 · 91 downloads · 3 likes

Akshara-8B is an AI model designed specifically for India's multilingual ecosystem, supporting the understanding and generation of multiple Indian languages.

Tags: Large Language Model · Transformers · Supports Multiple Languages
## Arcee Blitz
**arcee-ai** · Apache-2.0 · 4,923 downloads · 74 likes

A 24B-parameter model based on the Mistral architecture, distilled from DeepSeek and designed for speed and efficiency.

Tags: Large Language Model · Transformers
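
A sketch of running this model through the Transformers text-generation pipeline, assuming the repo id `arcee-ai/Arcee-Blitz` (inferred from the listing) and a recent Transformers release that accepts chat-format input:

```python
# Minimal sketch: chat generation through the Transformers pipeline.
# "arcee-ai/Arcee-Blitz" is an assumed repo id based on this listing.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="arcee-ai/Arcee-Blitz",  # assumption, not verified here
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize model distillation in two sentences."}]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # last message is the assistant reply
```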
## Distil-Whisper distil-small.en
**distil-whisper** · MIT · 33.51k downloads · 97 likes

Distil-Whisper is a distilled version of the Whisper model: 6x faster and 49% smaller, while performing within 1% WER of Whisper on out-of-distribution evaluation sets.

Tags: Speech Recognition · Transformers · English
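
This checkpoint is published as `distil-whisper/distil-small.en`; a minimal transcription example with the Transformers ASR pipeline (`sample.wav` is a placeholder path):

```python
# Minimal example: English speech recognition with Distil-Whisper.
# "sample.wav" is a placeholder path to any English audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="distil-whisper/distil-small.en")
print(asr("sample.wav")["text"])
```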
## MiniLM-L12-H384-uncased
**microsoft** · MIT · 10.19k downloads · 89 likes

MiniLM is a compact, efficient pre-trained language model compressed via deep self-attention distillation, suitable for language understanding and generation tasks.

Tags: Large Language Model
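
A minimal sketch of extracting sentence embeddings from this checkpoint (`microsoft/MiniLM-L12-H384-uncased`); the mean-pooling strategy is an illustrative choice, not something prescribed by the model card:

```python
# Minimal sketch: sentence embeddings from MiniLM via mean pooling.
import torch
from transformers import AutoModel, AutoTokenizer

name = "microsoft/MiniLM-L12-H384-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

sentences = ["Distillation compresses large models.", "Smaller models run faster."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state    # shape: (batch, seq_len, 384)

mask = batch["attention_mask"].unsqueeze(-1).float()  # zero out padding positions
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)                          # torch.Size([2, 384])
```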
## DistilRoBERTa-base
**distilbert** · Apache-2.0 · 1.2M downloads · 153 likes

DistilRoBERTa is a distilled version of RoBERTa-base with fewer parameters and faster inference, suitable for English text-processing tasks.

Tags: Large Language Model · English
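
A minimal masked-token prediction example with this checkpoint (`distilroberta-base`); RoBERTa-family models use `<mask>` as the mask token:

```python
# Minimal example: masked-token prediction with DistilRoBERTa.
from transformers import pipeline

fill = pipeline("fill-mask", model="distilroberta-base")
for pred in fill("Distillation makes model inference much <mask>."):
    print(f"{pred['token_str'].strip():>10}  score={pred['score']:.3f}")
```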